Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
As graph data become increasingly prevalent, the need for reliable inferential algorithms on these complex data domains is paramount. In many settings of interest, the situation is further complicated by adversarial data contamination, where the adversary's effect is typically to alter the data distribution in ways that negatively impact statistical and algorithmic performance. We study this phenomenon in the context of vertex nomination, a semi-supervised information retrieval task for network data. Here, a common suite of methods relies on spectral graph embeddings, which provide both good algorithmic performance and a flexible setting in which regularization techniques can be implemented to help mitigate the effect of an adversary. Many current regularization methods rely on direct network truncation to effectively remove the adversarial contamination, although such direct truncation often gives rise to complicated dependency structures in the resulting graph. We propose a new trimming method that operates in model space, which can address both block-structured contamination and white-noise contamination (in which the distribution of the contamination is unknown). Compared with direct truncation, this model-based trimming is more amenable to theoretical analysis while also demonstrating superior performance in a number of simulations.
We present Chirpy Cardinal, an open-domain social chatbot. Aiming to be both informative and conversational, our bot chats with users in an authentic, emotionally engaging way. By integrating controlled neural generation with scaffolded, hand-written dialogue, we let both the user and the bot take turns driving the conversation, producing an engaging and fluent experience. Deployed in the fourth iteration of the Alexa Prize Socialbot Grand Challenge, Chirpy Cardinal handled thousands of conversations per day, ranking second out of nine bots with an average user rating of 3.58/5.
Randomized singular value decomposition (RSVD) is a class of computationally efficient algorithms for computing the truncated SVD of large data matrices. Given an $n \times n$ symmetric matrix $\mathbf{M}$, the prototypical RSVD algorithm outputs an approximation of the $k$ leading singular vectors of $\mathbf{M}$ by computing the SVD of $\mathbf{M}^{g} \mathbf{G}$; here $g \geq 1$ is an integer and $\mathbf{G} \in \mathbb{R}^{n \times k}$ is a random Gaussian sketching matrix. In this paper we study the statistical properties of RSVD under a general "signal-plus-noise" framework, i.e., the observed matrix $\hat{\mathbf{M}}$ is assumed to be an additive perturbation of some true but unknown signal matrix $\mathbf{M}$. We first derive upper bounds for the $\ell_2$ (spectral norm) and $\ell_{2 \to \infty}$ (maximum row-wise $\ell_2$ norm) distances between the approximate singular vectors of $\hat{\mathbf{M}}$ and the true singular vectors of the signal matrix $\mathbf{M}$. These upper bounds depend on the signal-to-noise ratio (SNR) and the number of power iterations $g$. A phase transition phenomenon is observed: a smaller SNR requires a larger value of $g$ to guarantee convergence of the $\ell_2$ and $\ell_{2 \to \infty}$ distances. We also show that the thresholds for $g$ at which these phase transitions occur are sharp whenever the noise matrices satisfy a certain trace growth condition. Finally, we derive normal approximations for the row-wise fluctuations of the approximate singular vectors and the entrywise fluctuations of the approximate matrix. We illustrate our theoretical results by deriving nearly optimal performance guarantees for RSVD when applied to three statistical inference problems, namely community detection, matrix completion, and principal component analysis with missing data.
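The prototypical algorithm described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under the abstract's setup (symmetric $\mathbf{M}$, Gaussian sketch, $g$ power iterations), not the paper's reference implementation:

```python
import numpy as np

def rsvd_symmetric(M, k, g=1, rng=None):
    """Prototype RSVD for an n x n symmetric matrix M.

    Approximates the k leading singular vectors of M via the SVD of the
    sketched matrix M^g G, where G is an n x k Gaussian sketching matrix
    and g >= 1 is the number of power iterations.
    """
    rng = np.random.default_rng(rng)
    n = M.shape[0]
    Y = rng.standard_normal((n, k))   # Gaussian sketching matrix G
    for _ in range(g):                # form M^g G by repeated multiplication
        Y = M @ Y
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    return U                          # n x k approximate leading singular vectors
```

Increasing $g$ amplifies the leading part of the spectrum relative to the noise, which is the SNR-versus-$g$ trade-off behind the phase transition the paper analyzes.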
We connect two random graph models, the Popularity Adjusted Block Model (PABM) and the Generalized Random Dot Product Graph (GRDPG), by demonstrating that the PABM is a special case of the GRDPG in which communities correspond to mutually orthogonal subspaces of latent vectors. This insight allows us to construct new algorithms for community detection and parameter estimation for the PABM, and to improve an existing algorithm that relies on sparse subspace clustering. Using the asymptotic properties of the adjacency spectral embedding for the GRDPG, we derive asymptotic properties of these algorithms. In particular, we demonstrate that the absolute number of community detection errors tends to zero as the number of graph vertices tends to infinity. Simulation experiments illustrate these properties.
The random dot product graph (RDPG) is a generative model for networks in which vertices correspond to positions in a latent Euclidean space and edge probabilities are determined by the dot products of the latent positions. We consider RDPGs for which the latent positions are randomly sampled from an unknown $1$-dimensional submanifold of the latent space. In principle, restricted inference, i.e., procedures that exploit the submanifold structure, should be more effective than unrestricted inference; however, it is unclear how to conduct restricted inference when the submanifold is unknown. We submit that techniques for manifold learning can be used to learn the unknown submanifold well enough to realize the benefits of restricted inference. To illustrate, we test $1$- and $2$-sample hypotheses about the Fréchet means of the latent positions, using the full set of vertices to infer the latent structure. We propose test statistics that deploy the Isomap procedure, using shortest-path distances on a neighborhood graph constructed from the estimated latent positions to estimate arc lengths on the unknown $1$-dimensional manifold. Unlike conventional applications of Isomap, the estimated latent positions do not lie on the submanifold of interest. We extend existing convergence results for Isomap to this setting and use them to demonstrate that, as the number of auxiliary vertices increases, the power of our test converges to the power of the corresponding test when the submanifold is known. Finally, we apply our methods to an inference problem arising in the study of the connectome of the Drosophila larval mushroom body. The univariate learnt-manifold test rejects ($p < 0.05$) while the multivariate ambient-space test does not ($p \gg 0.05$), illustrating the value of identifying and exploiting low-dimensional structure for subsequent inference.
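The Isomap-style arc-length estimate described above can be sketched as follows. This is a minimal illustration under assumed inputs (points near a curve and a hand-chosen neighborhood radius `epsilon`), not the paper's test statistic:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap_arc_distances(X, epsilon):
    """All-pairs geodesic distance estimates for points X near a 1-d manifold.

    Builds an epsilon-neighborhood graph on the rows of X with Euclidean edge
    weights; shortest-path distances through the graph approximate arc lengths
    along the underlying curve. Zero entries in W mark absent edges, which is
    how scipy's csgraph routines interpret dense inputs.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.where(D <= epsilon, D, 0.0)   # keep only short, "on-manifold" edges
    return shortest_path(W, method="D", directed=False)
```

For points along a quarter of the unit circle, the endpoint-to-endpoint estimate approaches the arc length $\pi/2$ rather than the Euclidean chord $\sqrt{2}$, which is the sense in which shortest-path distances recover arc lengths.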
Spectral embedding is a procedure that can be used to obtain vector representations of the nodes of a graph. This paper proposes a generalization of the latent position network model known as the random dot product graph that allows these vector representations to be interpreted as latent position estimates. The generalization is needed to model heterophilic connectivity (e.g., "opposites attract") and, more generally, to cope with negative eigenvalues. We show that, whether the adjacency or the normalized Laplacian matrix is used, spectral embedding produces uniformly consistent latent position estimates with asymptotically Gaussian error (up to identifiability). The standard and mixed-membership stochastic block models are special cases in which the latent positions take only $K$ distinct vector values, representing communities, or live in the $(K-1)$-simplex with those vertices, respectively. Under the stochastic block model, our theory suggests clustering with a Gaussian mixture model (rather than $K$-means), and, under mixed membership, fitting the minimum-volume enclosing simplex, an existing recommendation previously supported only under non-negative-definiteness assumptions. Empirical improvements in link prediction (over the random dot product graph) and the ability to uncover richer latent structure (than is posited by the standard or mixed-membership stochastic block models) are demonstrated in a cyber-security example.
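A hedged sketch of the adjacency spectral embedding this abstract refers to (not the authors' code; the two-block example below is an assumption): keep the $d$ eigenpairs largest in absolute value and scale by $\sqrt{|\lambda|}$, so the negative eigenvalues that arise under heterophilic connectivity are retained rather than discarded:

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Embed graph nodes into R^d from a symmetric adjacency (or probability) matrix.

    Selects the d eigenpairs of A largest in |eigenvalue| and scales each
    eigenvector by sqrt(|eigenvalue|); keeping negative eigenvalues lets the
    embedding represent heterophilic ("opposites attract") structure.
    """
    lam, U = np.linalg.eigh(A)
    idx = np.argsort(np.abs(lam))[::-1][:d]
    return U[:, idx] * np.sqrt(np.abs(lam[idx]))
```

Under a stochastic block model the embedded rows concentrate around $K$ Gaussian clusters with generally unequal covariances, which is why the theory above favors fitting a Gaussian mixture model over running $K$-means on these rows.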
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs automatically (MER) is therefore becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To address this data hunger, we construct DFME (Dynamic Facial Micro-expressions), a dynamic spontaneous ME dataset with the largest ME data scale to date, comprising 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then run MER experiments with four classical spatiotemporal feature learning models on DFME to objectively verify the validity of the dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that the DFME dataset can facilitate research on automatic MER and provides a new benchmark for the task. DFME will be published via https://mea-lab-421.github.io.
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.